Variance-Reduced and Projection-Free Stochastic Optimization

Authors

  • Elad Hazan
  • Haipeng Luo
Abstract

The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints. However, in the stochastic learning setting, it is still relatively understudied compared to its gradient descent counterpart. In this work, leveraging a recent variance reduction technique, we propose two stochastic Frank-Wolfe variants which substantially improve previous results in terms of the number of stochastic gradient evaluations needed to achieve 1 − ε accuracy. For example, we improve from O(1/ε) to O(ln(1/ε)) if the objective function is smooth and strongly convex, and from O(1/ε^2) to O(1/ε^1.5) if the objective function is smooth and Lipschitz. The theoretical improvement is also observed in experiments on real-world datasets for a multiclass classification application.
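To make the idea concrete, below is a minimal Python sketch of one variance-reduced Frank-Wolfe pass in the spirit the abstract describes: a full gradient is computed at a periodic snapshot point, stochastic gradients are corrected against that snapshot, and each update is a projection-free linear-oracle step. All names (`grad_full`, `grad_sample`, the l1-ball oracle) and the fixed batch/epoch schedules are illustrative assumptions, not the exact algorithm or parameter schedule from the paper.

```python
import numpy as np

def linear_oracle_l1(g, radius=1.0):
    """Linear minimization oracle for the l1 ball:
    argmin_{||v||_1 <= radius} <g, v> is a signed coordinate vertex."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def variance_reduced_fw(grad_full, grad_sample, x0, epochs=20, inner=50, batch=10):
    """Sketch of a variance-reduced, projection-free Frank-Wolfe loop.

    grad_full(x)            -> exact gradient (used only at snapshots)
    grad_sample(x, b, seed) -> minibatch gradient of size b; the seed lets us
                               reuse the same samples at both evaluation points
    (Both callables are hypothetical, supplied by the user.)
    """
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        mu = grad_full(snapshot)                # full gradient at the snapshot
        for t in range(1, inner + 1):
            seed = np.random.randint(2**31)
            # variance-reduced estimate: stochastic gradient at x,
            # corrected by the same samples evaluated at the snapshot
            g = grad_sample(x, batch, seed) - grad_sample(snapshot, batch, seed) + mu
            v = linear_oracle_l1(g)             # linear step, no projection
            gamma = 2.0 / (t + 1)               # classic Frank-Wolfe step size
            x = (1 - gamma) * x + gamma * v
    return x
```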


Similar resources

Stochastic Variance Reduction for Policy Gradient Estimation

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
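For reference, the SVRG estimator that snippet builds on replaces a plain stochastic gradient with a snapshot-corrected one. A minimal generic SVRG loop is sketched below (the supervised-optimization form, not the policy-gradient variant from that paper; `grad_i` and `full_grad` are assumed user-supplied callables):

```python
import numpy as np

def svrg(grad_i, full_grad, x0, n, lr=0.01, epochs=10, m=100, rng=None):
    """Minimal SVRG loop. grad_i(x, i) is the gradient of the i-th sample
    loss; full_grad(x) is the exact gradient over all n samples."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for _ in range(epochs):
        snap, mu = x.copy(), full_grad(x)   # snapshot and its full gradient
        for _ in range(m):
            i = rng.integers(n)
            # unbiased estimate whose variance shrinks as x nears the snapshot
            g = grad_i(x, i) - grad_i(snap, i) + mu
            x = x - lr * g
    return x
```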


Adaptive Reduced-Rank LCMV Beamforming Algorithms Based on Joint Iterative Optimization of Filters: Design and Analysis

This paper presents reduced-rank linearly constrained minimum variance (LCMV) beamforming algorithms based on joint iterative optimization of filters. The proposed reduced-rank scheme is based on a constrained joint iterative optimization of filters according to the minimum variance criterion. The proposed optimization procedure adjusts the parameters of a projection matrix and an adaptive redu...
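For orientation, the full-rank LCMV solution that reduced-rank schemes start from has a closed form: minimize w^H R w subject to C^H w = f. A small numpy sketch follows; the reduced-rank method in the snippet additionally learns a projection matrix jointly with a smaller filter, which is not reproduced here.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Closed-form (full-rank) LCMV beamformer:
    w = R^{-1} C (C^H R^{-1} C)^{-1} f,
    where R is the array covariance and C^H w = f are the linear constraints."""
    Rinv_C = np.linalg.solve(R, C)          # R^{-1} C without an explicit inverse
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)
```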


An Improved SPSA Algorithm for Stochastic Optimization with Bound Constraints

We show that the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm with projection may exhibit slow convergence in constrained stochastic optimization problems when the optimum is situated on the constraints. The cause of the slow convergence is a geometric interaction between the projection operator and the SPSA gradient estimate. The effect of this interaction can be describ...
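For concreteness, one projected SPSA iteration of the standard form the snippet analyzes looks roughly like this (box constraints, so the projection is clipping; the step sizes `a` and `c` are illustrative):

```python
import numpy as np

def projected_spsa_step(f, x, a, c, lo, hi, rng):
    """One SPSA step with projection onto the box [lo, hi].
    The gradient is estimated from two function evaluations along a
    random Rademacher (+/-1) perturbation."""
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) / delta
    return np.clip(x - a * g_hat, lo, hi)   # projection onto the constraints
```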


Stochastic Variance-Reduced Cubic Regularized Newton Method

We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an (ε, √ε)-approximate local minimum within Õ(n^{4/5}/ε^{3/2}) second-order oracl...
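As background, each outer step of a cubic-regularized Newton method solves min_h g·h + ½ hᵀHh + (ρ/6)‖h‖³; a common subsolver is plain gradient descent on that model, sketched below. The semi-stochastic g and H from the snippet would be plugged in here; the step count and learning rate are illustrative.

```python
import numpy as np

def cubic_subsolver(g, H, rho, steps=200, eta=0.01):
    """Gradient descent on the cubic model m(h) = g.h + 0.5 h'Hh + (rho/6)||h||^3.
    The model's gradient is g + Hh + (rho/2)||h|| h."""
    h = np.zeros_like(g)
    for _ in range(steps):
        h = h - eta * (g + H @ h + 0.5 * rho * np.linalg.norm(h) * h)
    return h  # the Newton-type update is then x <- x + h
```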


Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations

We present novel minibatch stochastic optimization methods for empirical risk minimization problems; the methods efficiently leverage variance-reduced first-order and sub-sampled higher-order information to accelerate convergence. For quadratic objectives, we prove improved iteration complexity over the state of the art under reasonable assumptions. We also provide empirical evidence of th...
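A minibatch variance-reduced proximal step of the general kind described there combines an SVRG-style estimate (as in the loop sketched earlier) with a proximal operator for the regularizer. A sketch with l1 regularization, where `grad_batch`, `lam`, and `lr` are illustrative assumptions:

```python
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_svrg_step(grad_batch, x, snap, mu, idx, lr, lam):
    """One minibatch proximal step with a variance-reduced gradient:
    a gradient step using the snapshot-corrected estimate, then the prox."""
    g = grad_batch(x, idx) - grad_batch(snap, idx) + mu
    return prox_l1(x - lr * g, lr * lam)
```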



Publication date: 2016